1. researchsquare; 2023. Preprint in English | PREPRINT-RESEARCHSQUARE | ID: ppzbmed-10.21203.rs.3.rs-3034810.v1

ABSTRACT

The COVID-19 pandemic has placed an enormous strain on healthcare systems worldwide, creating a need for more efficient methods of identifying the severity of COVID-19 patients so that resources can be allocated effectively. Existing X-ray processing models for COVID-19 identification are either highly complicated or exhibit lower efficiency in real-time scenarios. To overcome these issues, this paper presents a novel approach for identifying the severity of COVID-19 patients using an augmented multimodal X-ray feature representation model. The proposed model combines X-ray images, clinical data, and demographic information to create a robust representation of each patient's condition. The collected information is converted into multidomain feature sets, including frequency, Gabor, wavelet, and entropy components. A customized deep neural network is trained on this representation to predict the severity level of COVID-19 patients. To evaluate the performance of the proposed model, we used a dataset of X-ray images and clinical data from COVID-19 patients. Our results demonstrate that the proposed model outperforms existing methods for identifying COVID-19 severity levels, achieving an accuracy of 98.5% on multiple dataset samples. The model's performance was also promising in terms of precision, recall, and delay, so it has the potential to aid in the early identification and effective management of severe COVID-19 cases in clinical use, contributing to the global effort to combat the COVID-19 pandemic.
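
The abstract only names the feature domains (frequency, Gabor, wavelet, entropy) and the multimodal fusion step, so the following is a minimal sketch of how such a pipeline might look, assuming grayscale X-ray arrays and standard Python libraries (NumPy, PyWavelets, scikit-image, scikit-learn). All function names, filter parameters, and the stand-in classifier are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of the multidomain feature pipeline described in the abstract.
    # Every name and parameter below is an illustrative assumption, not the authors' code.
    import numpy as np
    import pywt                                   # wavelet decomposition
    from skimage.filters import gabor             # Gabor filter responses
    from skimage.measure import shannon_entropy   # entropy component
    from sklearn.neural_network import MLPClassifier

    def multidomain_features(xray: np.ndarray) -> np.ndarray:
        """Frequency, Gabor, wavelet, and entropy descriptors for one grayscale X-ray."""
        # Frequency domain: summary statistics of the 2-D FFT magnitude spectrum.
        spectrum = np.abs(np.fft.fft2(xray))
        freq = [spectrum.mean(), spectrum.std()]

        # Gabor domain: mean response magnitude at a few orientations (frequency 0.2 assumed).
        gab = []
        for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, _ = gabor(xray, frequency=0.2, theta=theta)
            gab.append(np.abs(real).mean())

        # Wavelet domain: energy of each detail subband from a level-2 'db2' decomposition.
        _, *details = pywt.wavedec2(xray, "db2", level=2)
        wav = [np.mean(np.square(band)) for level in details for band in level]

        # Entropy component: global Shannon entropy of the image.
        ent = [shannon_entropy(xray)]

        return np.asarray(freq + gab + wav + ent, dtype=np.float64)

    def fuse(xray: np.ndarray, clinical: np.ndarray, demographic: np.ndarray) -> np.ndarray:
        """Concatenate image-derived features with clinical and demographic vectors."""
        return np.concatenate([multidomain_features(xray), clinical, demographic])

    # A small fully connected network stands in for the paper's customized deep model.
    severity_model = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)
    # X = np.stack([fuse(img, clin, demo) for img, clin, demo in patients])
    # severity_model.fit(X, severity_labels)

In this sketch the fused feature vector is what the classifier sees; the paper's customized deep neural network would replace the MLPClassifier placeholder.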


Subject(s)
COVID-19